
    Practical End-to-End Verifiable Voting via Split-Value Representations and Randomized Partial Checking

    We describe how to use Rabin's "split-value" representations, originally developed for use in secure auctions, to efficiently implement end-to-end verifiable voting. We propose a simple and very elegant combination of split-value representations with "randomized partial checking" (due to Jakobsson et al. [16]).
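
    The core primitive is easy to make concrete: a vote x is split into two random shares that sum to x modulo some modulus, each share is committed to separately, and a verifier asks to open only one randomly chosen share, which reveals nothing about x yet catches a malformed representation with probability 1/2 per ballot. The sketch below illustrates only these two ingredients, not the authors' protocol; the modulus M, the hash-based commitments, and all function names are assumptions made for the example.

```python
# Minimal sketch of split-value representations plus a randomized partial
# check.  Assumed parameters: votes are integers mod M, and commitments are
# plain SHA-256 hashes of value || nonce (illustrative only).

import hashlib
import secrets

M = 2**31 - 1  # assumed modulus for vote values


def commit(value: int, nonce: bytes) -> str:
    """Hash commitment to an integer value (toy commitment scheme)."""
    return hashlib.sha256(str(value).encode() + nonce).hexdigest()


def split_value(x: int) -> tuple[int, int]:
    """Split-value representation: x = (u + v) mod M with u uniformly random."""
    u = secrets.randbelow(M)
    v = (x - u) % M
    return u, v


def publish_ballot(x: int):
    """Commit separately to both halves of the split representation."""
    u, v = split_value(x)
    nu, nv = secrets.token_bytes(16), secrets.token_bytes(16)
    public = (commit(u, nu), commit(v, nv))   # posted on the bulletin board
    private = ((u, nu), (v, nv))              # kept by the prover
    return public, private


def partial_check(public, private) -> bool:
    """Randomized partial check: open one randomly chosen half only.

    A single opened half leaks nothing about x, yet a malformed
    representation is caught with probability 1/2 (amplified over
    many ballots or repetitions)."""
    i = secrets.randbelow(2)
    value, nonce = private[i]
    return commit(value, nonce) == public[i]


if __name__ == "__main__":
    pub, priv = publish_ballot(x=7)
    print(partial_check(pub, priv))  # True for an honestly formed ballot
```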

    The subgraph homeomorphism problem

    We investigate the problem of finding a homeomorphic image of a “pattern” graph H in a larger input graph G. We view this problem as finding specified sets of edge-disjoint or node-disjoint paths in G. Our main result is a linear-time algorithm to determine if there exists a simple cycle containing three given nodes in G (here H is a triangle). No polynomial-time algorithm for this problem was previously known. We also discuss a variety of reductions between related versions of this problem and a number of open problems.
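
    For contrast with the linear-time result, the property itself is easy to state as a brute-force check: does the graph contain a simple cycle passing through all three given nodes? The sketch below enumerates simple paths by depth-first search, which takes exponential time in the worst case and is only a naive reference implementation, not the paper's algorithm; the adjacency-list representation and function name are assumptions.

```python
# Brute-force reference check (exponential time): does graph G contain a
# simple cycle through all three given nodes a, b, c?  G is an adjacency-list
# dict; for an undirected graph both directions of each edge must be present.

def cycle_through_three(G, a, b, c) -> bool:
    """DFS over simple paths that start and end at a."""
    target = {b, c}

    def dfs(node, visited):
        for nxt in G.get(node, ()):
            if nxt == a and target <= visited and len(visited) >= 3:
                return True          # closed a simple cycle containing a, b, c
            if nxt not in visited and nxt != a:
                if dfs(nxt, visited | {nxt}):
                    return True
        return False

    return dfs(a, {a})


if __name__ == "__main__":
    # 5-cycle 0-1-2-3-4-0: a simple cycle through 0, 2, 3 exists.
    G = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(cycle_through_three(G, 0, 2, 3))  # True
```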

    Practical Provably Correct Voter Privacy Protecting End-to-End Voting Employing Multiparty Computations and Split Value Representations of Votes

    Continuing the work of Rabin and Rivest, we present another simple and fast method for conducting end-to-end voting and allowing public verification of the correctness of the announced vote-tallying results. This method was previously referred to as the SV/VCP method. In the present note, voter privacy protection is achieved by use of a simple form of Multi-Party Computation (MPC). At the end of the vote-tallying process, random permutations of the cast votes are publicly posted in the clear, without identification of voters or ballot IDs. Thus vote counting and assurance of the correct form of cast votes are directly available. Also, a proof of the claim that the revealed votes are a permutation of the concealed cast votes is publicly posted and verifiable by any interested party. Advantages of this method are: easy understandability by non-cryptographers and implementers, and ease of use by voters and election officials; direct handling of complicated ballot forms; independence from any specialized primitives; and speed of vote tallying and correctness proving, so that elections involving a million voters can be tallied and a proof of correctness of the results posted within a few minutes.
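
    The tallying step the abstract describes can be pictured in a few lines: each cast vote is held as a commitment tied to a ballot ID, and at tally time the plaintext votes are posted as a random permutation with all identifiers stripped, so anyone can count them directly. The sketch below shows only that public-tally step; the MPC-based proof that the revealed list really is a permutation of the concealed votes is the substance of the paper and is not implemented here, and all names and data layouts are illustrative assumptions.

```python
# Toy sketch of posting a de-identified random permutation of cast votes so
# that the tally can be checked by direct counting.  The permutation proof
# and the MPC machinery are deliberately omitted.

import hashlib
import random
import secrets
from collections import Counter


def commit(vote: str, nonce: bytes) -> str:
    """Toy hash commitment to a vote."""
    return hashlib.sha256(vote.encode() + nonce).hexdigest()


def cast(ballot_id: str, vote: str):
    nonce = secrets.token_bytes(16)
    return {"ballot_id": ballot_id,
            "commitment": commit(vote, nonce),   # posted at cast time
            "secret": (vote, nonce)}             # held by the election servers


def public_tally(cast_ballots):
    """Post a random permutation of the plaintext votes, with IDs stripped."""
    revealed = [b["secret"][0] for b in cast_ballots]
    random.shuffle(revealed)                     # breaks the voter-to-vote link
    return revealed, Counter(revealed)


if __name__ == "__main__":
    ballots = [cast(f"B{i}", v) for i, v in
               enumerate(["alice", "bob", "alice", "alice", "bob"])]
    revealed, counts = public_tally(ballots)
    print(counts)                                # alice: 3, bob: 2
```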

    Oblivion: Mitigating Privacy Leaks by Controlling the Discoverability of Online Information

    Search engines are the prevalently used tools to collect information about individuals on the Internet. Search results typically comprise a variety of sources that contain personal information, either intentionally released by the person herself or unintentionally leaked or published by third parties, often with detrimental effects on the individual's privacy. To grant individuals the ability to regain control over their disseminated personal information, the European Court of Justice recently ruled that EU citizens have a right to be forgotten, in the sense that indexing systems must offer them technical means to request removal of links from search results that point to sources violating their data protection rights. As of now, these technical means consist of a web form that requires a user to manually identify all relevant links upfront and to insert them into the web form, followed by a manual evaluation by employees of the indexing system to assess whether the request is eligible and lawful. We propose Oblivion, a universal framework to support the automation of the right to be forgotten in a scalable, provable and privacy-preserving manner. First, Oblivion enables a user to automatically find and tag her disseminated personal information using natural language processing and image recognition techniques and to file a request in a privacy-preserving manner. Second, Oblivion provides indexing systems with an automated and provable eligibility mechanism, asserting that the author of a request is indeed affected by an online resource. The automated eligibility proof ensures censorship-resistance, so that only legitimately affected individuals can request the removal of corresponding links from search results. We have conducted comprehensive evaluations, showing that Oblivion is capable of handling 278 removal requests per second and is hence suitable for large-scale deployment.
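
    The request flow can be sketched end to end, with heavy simplifications: the automated tagging step is reduced to a substring match, and the paper's provable eligibility mechanism is replaced by an HMAC under a key bound to the user's verified identity, which is only a stand-in. Everything here, including the function names and the toy index, is an assumption made for illustration.

```python
# Simplified sketch of a removal-request flow: find links that mention the
# user, file a request with an eligibility tag, and let the indexing system
# verify that the requester is the affected person.

import hmac
import hashlib


def find_affected_links(index: dict[str, str], personal_terms: list[str]) -> list[str]:
    """Stand-in for automated tagging: URLs whose content mentions the user."""
    return [url for url, text in index.items()
            if any(term.lower() in text.lower() for term in personal_terms)]


def file_request(urls: list[str], identity_key: bytes) -> dict:
    """Bundle the links with an eligibility tag the indexing system can check."""
    payload = "\n".join(sorted(urls)).encode()
    tag = hmac.new(identity_key, payload, hashlib.sha256).hexdigest()
    return {"urls": urls, "eligibility_tag": tag}


def verify_request(request: dict, identity_key: bytes) -> bool:
    """Indexing-system side: accept only requests from the affected person."""
    payload = "\n".join(sorted(request["urls"])).encode()
    expected = hmac.new(identity_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, request["eligibility_tag"])


if __name__ == "__main__":
    index = {"https://example.org/post1": "Jane Doe lives at ...",
             "https://example.org/post2": "Unrelated article"}
    key = b"key-bound-to-janes-verified-identity"   # illustrative shared secret
    req = file_request(find_affected_links(index, ["Jane Doe"]), key)
    print(verify_request(req, key))  # True
```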

    A Modular Voting Architecture ("Frogs")

    We present a “modular voting architecture” in which “vote generation” is performed separately from “vote casting.”
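
    As a rough illustration of the split, one can picture two independent components with a narrow interface: a vote-generation device that writes the voter's choices onto an inert record (the “frog”), and a vote-casting device that lets the voter confirm and deposit it. The interfaces below are assumptions made for the sketch, not the authors' specification.

```python
# Toy sketch of the generation/casting split: a frog is an immutable record
# produced by one device and deposited by another only after voter confirmation.

from dataclasses import dataclass


@dataclass(frozen=True)
class Frog:
    """The inert record carried between the two modules."""
    choices: tuple[str, ...]


class VoteGenerationDevice:
    def generate(self, choices: list[str]) -> Frog:
        return Frog(tuple(choices))


class VoteCastingDevice:
    def __init__(self):
        self.ballot_box: list[Frog] = []

    def cast(self, frog: Frog, voter_confirms: bool) -> bool:
        """Only a voter-confirmed frog enters the ballot box."""
        if voter_confirms:
            self.ballot_box.append(frog)
        return voter_confirms


if __name__ == "__main__":
    frog = VoteGenerationDevice().generate(["mayor: alice", "prop 7: yes"])
    box = VoteCastingDevice()
    print(box.cast(frog, voter_confirms=True), len(box.ballot_box))  # True 1
```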

    List Processing in Real Time on a Serial Computer

    Key Words and Phrases: real-time, compacting, garbage collection, list processing, virtual memory, file or database management, storage management, storage allocation, LISP, CDR-coding, reference counting. CR Categories: 3.50, 3.60, 3.73, 3.80, 4.13, 24.32, 4.33, 4.35, 4.49. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.

    A real-time list processing system is one in which the time required by each elementary list operation (CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical list processing systems such as LISP do not have this property because a call to CONS may invoke the garbage collector, which requires time proportional to the number of accessible cells to finish. The space requirement of a classical LISP system with N accessible cells under equilibrium conditions is (1.5+μ)N or (1+μ)N, depending upon whether a stack is required for the garbage collector, where μ>0 is typically less than 2.

    A list processing system is presented which: 1) is real-time, i.e. T(CONS) is bounded by a constant independent of the number of cells in use; 2) requires space (2+2μ)N, i.e. not more than twice that of a classical system; 3) runs on a serial computer without a time-sharing clock; 4) handles directed cycles in the data structures; 5) is fast, with the average time for each operation about the same as with normal garbage collection; 6) compacts, minimizing the working set; 7) keeps the free pool in one contiguous block, so objects of nonuniform size pose no problem; 8) uses one-phase incremental collection, with no separate mark, sweep, or relocate phases; 9) requires no garbage collector stack; 10) requires no "mark bits", per se; 11) is simple and suitable for microcoded implementation.

    Extensions of the system to handle a user program stack, compact list representation ("CDR-coding"), arrays of non-uniform size, and hash linking are discussed. CDR-coding is shown to reduce memory requirements for N LISP cells to ≈(1+μ)N. Our system is also compared with another approach to the real-time storage management problem, reference counting; reference counting is shown to be neither competitive with our system when speed of allocation is critical, nor compatible, in the sense that a system with both forms of garbage collection is worse than our pure one.

    MIT Artificial Intelligence Laboratory; Department of Defense Advanced Research Projects Agency.
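
    The heart of the technique is that collection work is interleaved with allocation in constant-sized slices: CONS performs at most a constant amount of copying work from the old semispace each time it is called, and a read barrier on CAR/CDR keeps the mutator seeing only copied cells, so no single operation ever pays for a full collection. The toy below sketches that idea in Python rather than on a real cons-cell heap; the class names, the per-cell cycle tag standing in for the two semispaces, and the flip policy are all assumptions made for the illustration, not the paper's implementation.

```python
# Toy incremental copying collector: each cons() does at most K cell-copies of
# collection work, so allocation time is bounded by a constant independent of
# the number of live cells.  The toy assumes the mutator reaches old cells
# only via the root set or via car()/cdr() (the read barrier).

class Cell:
    __slots__ = ("car", "cdr", "forward", "space")

    def __init__(self, car, cdr, space):
        self.car, self.cdr = car, cdr
        self.forward, self.space = None, space   # forwarding pointer, cycle tag


class BakerStyleHeap:
    K = 4               # bounded collection work performed per cons()
    FLIP_EVERY = 64     # toy stand-in for "semispace is full"

    def __init__(self):
        self.space = 0          # id of the current tospace
        self.tospace = []       # cells copied or allocated this cycle
        self.scan = 0           # Cheney-style scan pointer into tospace
        self.roots = []         # mutator root set
        self.collecting = False
        self.allocs = 0

    def _evacuate(self, obj):
        """Copy a fromspace cell into tospace once; atoms pass through."""
        if not isinstance(obj, Cell) or obj.space == self.space:
            return obj                            # atom, or already in tospace
        if obj.forward is None:
            obj.forward = Cell(obj.car, obj.cdr, self.space)
            self.tospace.append(obj.forward)
        return obj.forward

    # read barrier: the mutator only ever sees tospace cells
    def car(self, p):
        p.car = self._evacuate(p.car)
        return p.car

    def cdr(self, p):
        p.cdr = self._evacuate(p.cdr)
        return p.cdr

    def cons(self, a, b):
        if self.collecting:
            for _ in range(self.K):               # constant-bounded GC work
                if self.scan == len(self.tospace):
                    self.collecting = False       # this collection cycle is done
                    break
                cell = self.tospace[self.scan]
                cell.car = self._evacuate(cell.car)
                cell.cdr = self._evacuate(cell.cdr)
                self.scan += 1
        elif self.allocs >= self.FLIP_EVERY:
            self._flip()
        self.allocs += 1
        new = Cell(a, b, self.space)              # allocated directly in tospace
        self.tospace.append(new)
        return new

    def _flip(self):
        """Start a new cycle: old tospace becomes fromspace, copy the roots."""
        self.space += 1
        self.tospace, self.scan = [], 0
        self.allocs, self.collecting = 0, True
        self.roots = [self._evacuate(r) for r in self.roots]


if __name__ == "__main__":
    h = BakerStyleHeap()
    lst = None
    for i in range(1000):                         # build a long list; collection
        lst = h.cons(i, lst)                      # runs in small slices inside cons()
        h.roots = [lst]
    n = 0
    while lst is not None:                        # walk the list via the read barrier
        n, lst = n + 1, h.cdr(lst)
    print(n)                                      # 1000
```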